The data come from https://archive.ics.uci.edu/ml/datasets/wine+quality. The source provides two datasets, covering the red and white variants of the Portuguese “Vinho Verde” wine; in this project I consider only the red wine dataset. See [Cortez et al., 2009] for more details. The source notes that, due to privacy and logistical issues, only physicochemical (input) and sensory (output) variables are available. Variables that might have improved the predictions, such as grape type, wine brand, and selling price, are therefore missing, so bear in mind that this analysis may suffer from omitted variable bias.
The raw red wine dataset contains 12 variables: quality (a score between 0 and 10), which is the response variable in our regression setting, and 11 physicochemical attributes as inputs: fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulphates, and alcohol.
red_wine <- read.csv("winequality-red.csv")
summary(red_wine)
## fixed_acidity volatile_acidity citric_acid residual_sugar
## Min. : 4.60 Min. :0.1200 Min. :0.000 Min. : 0.900
## 1st Qu.: 7.10 1st Qu.:0.3900 1st Qu.:0.090 1st Qu.: 1.900
## Median : 7.90 Median :0.5200 Median :0.260 Median : 2.200
## Mean : 8.32 Mean :0.5278 Mean :0.271 Mean : 2.539
## 3rd Qu.: 9.20 3rd Qu.:0.6400 3rd Qu.:0.420 3rd Qu.: 2.600
## Max. :15.90 Max. :1.5800 Max. :1.000 Max. :15.500
## chlorides free_sulfur_do2 tot_sulfur_do2 density
## Min. :0.01200 Min. : 1.00 Min. : 6.00 Min. :0.9901
## 1st Qu.:0.07000 1st Qu.: 7.00 1st Qu.: 22.00 1st Qu.:0.9956
## Median :0.07900 Median :14.00 Median : 38.00 Median :0.9968
## Mean :0.08747 Mean :15.87 Mean : 46.47 Mean :0.9967
## 3rd Qu.:0.09000 3rd Qu.:21.00 3rd Qu.: 62.00 3rd Qu.:0.9978
## Max. :0.61100 Max. :72.00 Max. :289.00 Max. :1.0037
## pH sulphates alcohol quality
## Min. :2.740 Min. :0.3300 Min. : 8.40 Min. :3.000
## 1st Qu.:3.210 1st Qu.:0.5500 1st Qu.: 9.50 1st Qu.:5.000
## Median :3.310 Median :0.6200 Median :10.20 Median :6.000
## Mean :3.311 Mean :0.6581 Mean :10.42 Mean :5.636
## 3rd Qu.:3.400 3rd Qu.:0.7300 3rd Qu.:11.10 3rd Qu.:6.000
## Max. :4.010 Max. :2.0000 Max. :14.90 Max. :8.000
#str(red_wine)
The dataset contains no missing values. However, the variable citric_acid contains exact zeros; ideally, we would confirm with the client whether these are genuine zero measurements.
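These checks can be reproduced directly in base R (a quick sketch; assumes red_wine is loaded as above):

```r
# Per-column counts of missing values and exact zeros
colSums(is.na(red_wine))   # all zeros: no missing values
colSums(red_wine == 0)     # per the summary above, only citric_acid has exact zeros
```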
boxplot(red_wine[, c(1:5)])
boxplot(red_wine[, c(6:7)])
boxplot(red_wine[, c(8:11)])
# install.packages("GGally")
library(GGally)
## Loading required package: ggplot2
## Registered S3 method overwritten by 'GGally':
## method from
## +.gg ggplot2
ggpairs(red_wine, # Data frame
        columns = 1:11) # Columns
The plots show that several features, including citric_acid, free_sulfur_do2, and tot_sulfur_do2, contain anomalies (outliers). Outliers can distort predictions of red wine quality, so some preprocessing is warranted. Below, the features are standardized (Z-score normalization) to bring them onto a common scale; note that standardization rescales the data but does not remove outliers, so a robust scaler (e.g., one based on the median and IQR) could be considered as an alternative.
Next, we check whether the data contain any near-zero-variance features.
library(caret)
## Loading required package: lattice
nzv_features <- nearZeroVar(red_wine[, -12], names = TRUE)
print(nzv_features)
## character(0)
Based on the result (an empty character vector), none of the features has near-zero variance.
library(estimatr)
lr_mod1 <- lm_robust(quality ~ ., data = red_wine)
summary(lr_mod1)
##
## Call:
## lm_robust(formula = quality ~ ., data = red_wine)
##
## Standard error type: HC2
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|) CI Lower CI Upper
## (Intercept) 21.965208 2.447e+01 0.8975 3.696e-01 -2.604e+01 69.968344
## fixed_acidity 0.024991 3.233e-02 0.7731 4.396e-01 -3.842e-02 0.088398
## volatile_acidity -1.083590 1.369e-01 -7.9157 4.578e-15 -1.352e+00 -0.815084
## citric_acid -0.182564 1.527e-01 -1.1956 2.320e-01 -4.821e-01 0.116935
## residual_sugar 0.016331 1.917e-02 0.8518 3.944e-01 -2.127e-02 0.053936
## chlorides -1.874225 4.811e-01 -3.8961 1.018e-04 -2.818e+00 -0.930651
## free_sulfur_do2 0.004361 2.242e-03 1.9452 5.193e-02 -3.647e-05 0.008759
## tot_sulfur_do2 -0.003265 7.564e-04 -4.3160 1.687e-05 -4.748e-03 -0.001781
## density -17.881164 2.504e+01 -0.7141 4.753e-01 -6.700e+01 31.235364
## pH -0.413653 2.135e-01 -1.9376 5.284e-02 -8.324e-01 0.005084
## sulphates 0.916334 1.346e-01 6.8056 1.422e-11 6.522e-01 1.180434
## alcohol 0.276198 2.905e-02 9.5089 6.851e-21 2.192e-01 0.333171
## DF
## (Intercept) 1587
## fixed_acidity 1587
## volatile_acidity 1587
## citric_acid 1587
## residual_sugar 1587
## chlorides 1587
## free_sulfur_do2 1587
## tot_sulfur_do2 1587
## density 1587
## pH 1587
## sulphates 1587
## alcohol 1587
##
## Multiple R-squared: 0.3606 , Adjusted R-squared: 0.3561
## F-statistic: 78.12 on 11 and 1587 DF, p-value: < 2.2e-16
lr_mod2 <- lm_robust(quality ~ volatile_acidity + chlorides + tot_sulfur_do2 + pH + sulphates + alcohol, data = red_wine)
summary(lr_mod2)
##
## Call:
## lm_robust(formula = quality ~ volatile_acidity + chlorides +
## tot_sulfur_do2 + pH + sulphates + alcohol, data = red_wine)
##
## Standard error type: HC2
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|) CI Lower CI Upper
## (Intercept) 4.295732 0.4361275 9.850 2.931e-22 3.440287 5.151176
## volatile_acidity -1.038195 0.1147642 -9.046 4.191e-19 -1.263299 -0.813090
## chlorides -2.002284 0.4552740 -4.398 1.165e-05 -2.895283 -1.109284
## tot_sulfur_do2 -0.002372 0.0005154 -4.603 4.505e-06 -0.003383 -0.001361
## pH -0.435183 0.1278682 -3.403 6.821e-04 -0.685991 -0.184375
## sulphates 0.888680 0.1318932 6.738 2.239e-11 0.629978 1.147383
## alcohol 0.290674 0.0196687 14.778 2.068e-46 0.252095 0.329253
## DF
## (Intercept) 1592
## volatile_acidity 1592
## chlorides 1592
## tot_sulfur_do2 1592
## pH 1592
## sulphates 1592
## alcohol 1592
##
## Multiple R-squared: 0.3572 , Adjusted R-squared: 0.3548
## F-statistic: 144.1 on 6 and 1592 DF, p-value: < 2.2e-16
As the output shows, lr_mod2, with fewer features (after removing apparently irrelevant ones), performs about as well as lr_mod1 with all the features, based on adjusted \(R^2\) (0.3548 vs. 0.3561). Moving forward, we consider only the features included in lr_mod2, i.e., those that appear important for predicting red wine quality.
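The comparison can be made explicit by pulling the adjusted \(R^2\) values off the two fitted objects (estimatr's lm_robust stores them on the fit itself):

```r
# Adjusted R-squared of the full vs. the reduced model,
# matching the values reported in the summaries above
c(full = lr_mod1$adj.r.squared, reduced = lr_mod2$adj.r.squared)
```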
The boxplots above show that almost all of the features contain outliers to some degree. Therefore, we standardize the features.
red_wine_scaled <- scale(red_wine[, -12])
boxplot(red_wine_scaled[, c(1:5)])
boxplot(red_wine_scaled[, c(6:7)])
boxplot(red_wine_scaled[, c(8:11)])
all_red_wine_scaled <- data.frame(cbind(red_wine_scaled, red_wine$quality))
#str(all_red_wine_scaled)
colnames(all_red_wine_scaled)[12] <- "quality"
summary(all_red_wine_scaled)
## fixed_acidity volatile_acidity citric_acid residual_sugar
## Min. :-2.1364 Min. :-2.27757 Min. :-1.39104 Min. :-1.1623
## 1st Qu.:-0.7005 1st Qu.:-0.76969 1st Qu.:-0.92903 1st Qu.:-0.4531
## Median :-0.2410 Median :-0.04367 Median :-0.05634 Median :-0.2403
## Mean : 0.0000 Mean : 0.00000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.5056 3rd Qu.: 0.62649 3rd Qu.: 0.76501 3rd Qu.: 0.0434
## Max. : 4.3538 Max. : 5.87614 Max. : 3.74240 Max. : 9.1928
## chlorides free_sulfur_do2 tot_sulfur_do2 density
## Min. :-1.60344 Min. :-1.4221 Min. :-1.2302 Min. :-3.53762
## 1st Qu.:-0.37111 1st Qu.:-0.8485 1st Qu.:-0.7438 1st Qu.:-0.60757
## Median :-0.17989 Median :-0.1792 Median :-0.2574 Median : 0.00176
## Mean : 0.00000 Mean : 0.0000 Mean : 0.0000 Mean : 0.00000
## 3rd Qu.: 0.05383 3rd Qu.: 0.4900 3rd Qu.: 0.4722 3rd Qu.: 0.57664
## Max. :11.12355 Max. : 5.3656 Max. : 7.3728 Max. : 3.67890
## pH sulphates alcohol quality
## Min. :-3.69924 Min. :-1.9359 Min. :-1.8983 Min. :3.000
## 1st Qu.:-0.65494 1st Qu.:-0.6380 1st Qu.:-0.8661 1st Qu.:5.000
## Median :-0.00721 Median :-0.2251 Median :-0.2092 Median :6.000
## Mean : 0.00000 Mean : 0.0000 Mean : 0.0000 Mean :5.636
## 3rd Qu.: 0.57574 3rd Qu.: 0.4239 3rd Qu.: 0.6353 3rd Qu.:6.000
## Max. : 4.52687 Max. : 7.9162 Max. : 4.2011 Max. :8.000
#write.csv(all_red_wine_scaled,"/Users/prithvirajlakkakula/Desktop/Regression-RedWineQuality/all_red_wine_scaled.csv", row.names = FALSE)
Based on the boxplots, all the features have been brought to the same scale.
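A quick numeric confirmation (assumes all_red_wine_scaled as constructed above): after Z-score scaling, every feature should have mean approximately 0 and standard deviation 1.

```r
# Column means are ~0 (up to floating-point noise) and sds are 1
round(colMeans(all_red_wine_scaled[, 1:11]), 3)
apply(all_red_wine_scaled[, 1:11], 2, sd)
```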
library(h2o)
##
## ----------------------------------------------------------------------
##
## Your next step is to start H2O:
## > h2o.init()
##
## For H2O package documentation, ask for help:
## > ??h2o
##
## After starting H2O, you can use the Web UI at http://localhost:54321
## For more information visit https://docs.h2o.ai
##
## ----------------------------------------------------------------------
##
## Attaching package: 'h2o'
## The following objects are masked from 'package:stats':
##
## cor, sd, var
## The following objects are masked from 'package:base':
##
## &&, %*%, %in%, ||, apply, as.factor, as.numeric, colnames,
## colnames<-, ifelse, is.character, is.factor, is.numeric, log,
## log10, log1p, log2, round, signif, trunc
h2o.init()
## Connection successful!
##
## R is connected to the H2O cluster:
## H2O cluster uptime: 12 minutes 29 seconds
## H2O cluster timezone: America/Chicago
## H2O data parsing timezone: UTC
## H2O cluster version: 3.36.0.2
## H2O cluster version age: 2 months and 2 days
## H2O cluster name: H2O_started_from_R_prithvirajlakkakula_uyg756
## H2O cluster total nodes: 1
## H2O cluster total memory: 1.67 GB
## H2O cluster total cores: 4
## H2O cluster allowed cores: 4
## H2O cluster healthy: TRUE
## H2O Connection ip: localhost
## H2O Connection port: 54321
## H2O Connection proxy: NA
## H2O Internal Security: FALSE
## H2O API Extensions: Amazon S3, XGBoost, Algos, Infogram, AutoML, Core V3, TargetEncoder, Core V4
## R Version: R version 4.1.2 (2021-11-01)
df <- h2o.importFile("/Users/prithvirajlakkakula/Desktop/GitHubProjects/Regression-RedWineQuality/all_red_wine_scaled.csv")
h2o.describe(df)
## Label Type Missing Zeros PosInf NegInf Min Max
## 1 fixed_acidity real 0 0 0 0 -2.136377 4.353787
## 2 volatile_acidity real 0 0 0 0 -2.277567 5.876138
## 3 citric_acid real 0 0 0 0 -1.391037 3.742403
## 4 residual_sugar real 0 0 0 0 -1.162333 9.192806
## 5 chlorides real 0 0 0 0 -1.603443 11.123555
## 6 free_sulfur_do2 real 0 0 0 0 -1.422055 5.365606
## 7 tot_sulfur_do2 real 0 0 0 0 -1.230199 7.372847
## 8 density real 0 0 0 0 -3.537625 3.678904
## 9 pH real 0 0 0 0 -3.699244 4.526866
## 10 sulphates real 0 0 0 0 -1.935902 7.916200
## 11 alcohol real 0 0 0 0 -1.898325 4.201138
## 12 quality int 0 0 0 0 3.000000 8.000000
## Mean Sigma Cardinality
## 1 2.488455e-16 1.0000000 NA
## 2 2.132961e-16 1.0000000 NA
## 3 7.109871e-17 1.0000000 NA
## 4 -7.998605e-17 1.0000000 NA
## 5 5.332403e-17 1.0000000 NA
## 6 0.000000e+00 1.0000000 NA
## 7 1.155354e-16 1.0000000 NA
## 8 2.420911e-14 1.0000000 NA
## 9 -5.332403e-17 1.0000000 NA
## 10 -7.109871e-17 1.0000000 NA
## 11 -5.332403e-17 1.0000000 NA
## 12 5.636023e+00 0.8075694 NA
y <- "quality"
data_splits <- h2o.splitFrame(df, ratios = 0.75, seed = 143)
training <- data_splits[[1]]
testing <- data_splits[[2]]

auto_ml <- h2o.automl(y = y,
training_frame = training,
leaderboard_frame = testing,
max_runtime_secs = 500,
                      seed = 143)
## 13:20:38.632: Project: AutoML_3_20220328_132038
## 13:20:38.632: 5-fold cross-validation will be used.
## 13:20:38.638: Setting stopping tolerance adaptively based on the training frame: 0.02880756009578387
## 13:20:38.638: Build control seed: 143
## 13:20:38.638: training frame: Frame key: AutoML_3_20220328_132038_training_RTMP_sid_940e_2 cols: 12 rows: 1205 chunks: 16 size: 98497 checksum: 8155063905397532
## 13:20:38.638: validation frame: NULL
## 13:20:38.639: leaderboard frame: Frame key: RTMP_sid_940e_4 cols: 12 rows: 394 chunks: 16 size: 47642 checksum: -2075682014944944419
## 13:20:38.639: blending frame: NULL
## 13:20:38.639: response column: quality
## 13:20:38.639: fold column: null
## 13:20:38.639: weights column: null
## 13:20:38.639: Loading execution steps: [{XGBoost : [def_2 (1g, 10w), def_1 (2g, 10w), def_3 (3g, 10w), grid_1 (4g, 90w), lr_search (6g, 30w)]}, {GLM : [def_1 (1g, 10w)]}, {DRF : [def_1 (2g, 10w), XRT (3g, 10w)]}, {GBM : [def_5 (1g, 10w), def_2 (2g, 10w), def_3 (2g, 10w), def_4 (2g, 10w), def_1 (3g, 10w), grid_1 (4g, 60w), lr_annealing (6g, 10w)]}, {DeepLearning : [def_1 (3g, 10w), grid_1 (4g, 30w), grid_2 (5g, 30w), grid_3 (5g, 30w)]}, {completion : [resume_best_grids (10g, 60w)]}, {StackedEnsemble : [best_of_family_1 (1g, 5w), best_of_family_2 (2g, 5w), best_of_family_3 (3g, 5w), best_of_family_4 (4g, 5w), best_of_family_5 (5g, 5w), all_2 (2g, 10w), all_3 (3g, 10w), all_4 (4g, 10w), all_5 (5g, 10w), monotonic (6g, 10w), best_of_family_xgboost (6g, 10w), best_of_family_gbm (6g, 10w), all_xgboost (7g, 10w), all_gbm (7g, 10w), best_of_family_xglm (8g, 10w), all_xglm (8g, 10w), best_of_family (10g, 10w), best_N (10g, 10w)]}]
## 13:20:38.641: Defined work allocations: [Work{def_2, XGBoost, ModelBuild, group=1, weight=10}, Work{def_1, GLM, ModelBuild, group=1, weight=10}, Work{def_5, GBM, ModelBuild, group=1, weight=10}, Work{best_of_family_1, StackedEnsemble, ModelBuild, group=1, weight=5}, Work{def_1, XGBoost, ModelBuild, group=2, weight=10}, Work{def_1, DRF, ModelBuild, group=2, weight=10}, Work{def_2, GBM, ModelBuild, group=2, weight=10}, Work{def_3, GBM, ModelBuild, group=2, weight=10}, Work{def_4, GBM, ModelBuild, group=2, weight=10}, Work{best_of_family_2, StackedEnsemble, ModelBuild, group=2, weight=5}, Work{all_2, StackedEnsemble, ModelBuild, group=2, weight=10}, Work{def_3, XGBoost, ModelBuild, group=3, weight=10}, Work{XRT, DRF, ModelBuild, group=3, weight=10}, Work{def_1, GBM, ModelBuild, group=3, weight=10}, Work{def_1, DeepLearning, ModelBuild, group=3, weight=10}, Work{best_of_family_3, StackedEnsemble, ModelBuild, group=3, weight=5}, Work{all_3, StackedEnsemble, ModelBuild, group=3, weight=10}, Work{grid_1, XGBoost, HyperparamSearch, group=4, weight=90}, Work{grid_1, GBM, HyperparamSearch, group=4, weight=60}, Work{grid_1, DeepLearning, HyperparamSearch, group=4, weight=30}, Work{best_of_family_4, StackedEnsemble, ModelBuild, group=4, weight=5}, Work{all_4, StackedEnsemble, ModelBuild, group=4, weight=10}, Work{grid_2, DeepLearning, HyperparamSearch, group=5, weight=30}, Work{grid_3, DeepLearning, HyperparamSearch, group=5, weight=30}, Work{best_of_family_5, StackedEnsemble, ModelBuild, group=5, weight=5}, Work{all_5, StackedEnsemble, ModelBuild, group=5, weight=10}, Work{lr_search, XGBoost, Selection, group=6, weight=30}, Work{lr_annealing, GBM, Selection, group=6, weight=10}, Work{monotonic, StackedEnsemble, ModelBuild, group=6, weight=10}, Work{best_of_family_xgboost, StackedEnsemble, ModelBuild, group=6, weight=10}, Work{best_of_family_gbm, StackedEnsemble, ModelBuild, group=6, weight=10}, Work{all_xgboost, StackedEnsemble, ModelBuild, group=7, weight=10}, 
Work{all_gbm, StackedEnsemble, ModelBuild, group=7, weight=10}, Work{best_of_family_xglm, StackedEnsemble, ModelBuild, group=8, weight=10}, Work{all_xglm, StackedEnsemble, ModelBuild, group=8, weight=10}, Work{resume_best_grids, virtual, Dynamic, group=10, weight=60}, Work{best_of_family, StackedEnsemble, ModelBuild, group=10, weight=10}, Work{best_N, StackedEnsemble, ModelBuild, group=10, weight=10}]
## 13:20:38.641: Actual work allocations: [Work{def_2, XGBoost, ModelBuild, group=1, weight=10}, Work{def_1, GLM, ModelBuild, group=1, weight=10}, Work{def_5, GBM, ModelBuild, group=1, weight=10}, Work{best_of_family_1, StackedEnsemble, ModelBuild, group=1, weight=5}, Work{def_1, XGBoost, ModelBuild, group=2, weight=10}, Work{def_1, DRF, ModelBuild, group=2, weight=10}, Work{def_2, GBM, ModelBuild, group=2, weight=10}, Work{def_3, GBM, ModelBuild, group=2, weight=10}, Work{def_4, GBM, ModelBuild, group=2, weight=10}, Work{best_of_family_2, StackedEnsemble, ModelBuild, group=2, weight=5}, Work{all_2, StackedEnsemble, ModelBuild, group=2, weight=10}, Work{def_3, XGBoost, ModelBuild, group=3, weight=10}, Work{XRT, DRF, ModelBuild, group=3, weight=10}, Work{def_1, GBM, ModelBuild, group=3, weight=10}, Work{def_1, DeepLearning, ModelBuild, group=3, weight=10}, Work{best_of_family_3, StackedEnsemble, ModelBuild, group=3, weight=5}, Work{all_3, StackedEnsemble, ModelBuild, group=3, weight=10}, Work{grid_1, XGBoost, HyperparamSearch, group=4, weight=90}, Work{grid_1, GBM, HyperparamSearch, group=4, weight=60}, Work{grid_1, DeepLearning, HyperparamSearch, group=4, weight=30}, Work{best_of_family_4, StackedEnsemble, ModelBuild, group=4, weight=5}, Work{all_4, StackedEnsemble, ModelBuild, group=4, weight=10}, Work{grid_2, DeepLearning, HyperparamSearch, group=5, weight=30}, Work{grid_3, DeepLearning, HyperparamSearch, group=5, weight=30}, Work{best_of_family_5, StackedEnsemble, ModelBuild, group=5, weight=5}, Work{all_5, StackedEnsemble, ModelBuild, group=5, weight=10}, Work{lr_search, XGBoost, Selection, group=6, weight=30}, Work{lr_annealing, GBM, Selection, group=6, weight=10}, Work{monotonic, StackedEnsemble, ModelBuild, group=6, weight=10}, Work{best_of_family_xgboost, StackedEnsemble, ModelBuild, group=6, weight=10}, Work{best_of_family_gbm, StackedEnsemble, ModelBuild, group=6, weight=10}, Work{all_xgboost, StackedEnsemble, ModelBuild, group=7, weight=10}, 
Work{all_gbm, StackedEnsemble, ModelBuild, group=7, weight=10}, Work{best_of_family_xglm, StackedEnsemble, ModelBuild, group=8, weight=10}, Work{all_xglm, StackedEnsemble, ModelBuild, group=8, weight=10}, Work{resume_best_grids, virtual, Dynamic, group=10, weight=60}, Work{best_of_family, StackedEnsemble, ModelBuild, group=10, weight=10}, Work{best_N, StackedEnsemble, ModelBuild, group=10, weight=10}]
## 13:20:38.641: AutoML job created: 2022.03.28 13:20:38.632
## 13:20:38.643: AutoML build started: 2022.03.28 13:20:38.643
## 13:20:38.644: Time assigned for XGBoost_1_AutoML_3_20220328_132038: 142.856859375s
## 13:20:38.644: AutoML: starting XGBoost_1_AutoML_3_20220328_132038 model training
## 13:20:38.644: XGBoost_1_AutoML_3_20220328_132038 [XGBoost def_2] started
## 13:20:39.648: XGBoost_1_AutoML_3_20220328_132038 [XGBoost def_2] complete
## 13:20:39.648: Adding model XGBoost_1_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:39.651: New leader: XGBoost_1_AutoML_3_20220328_132038, mean_residual_deviance: 0.4147095236419399
## 13:20:39.651: Time assigned for GLM_1_AutoML_3_20220328_132038: 199.596796875s
## 13:20:39.652: AutoML: starting GLM_1_AutoML_3_20220328_132038 model training
## 13:20:39.652: GLM_1_AutoML_3_20220328_132038 [GLM def_1] started
## 13:20:40.653: GLM_1_AutoML_3_20220328_132038 [GLM def_1] complete
## 13:20:40.653: Adding model GLM_1_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:40.657: Time assigned for GBM_1_AutoML_3_20220328_132038: 331.9906875s
## 13:20:40.657: AutoML: starting GBM_1_AutoML_3_20220328_132038 model training
## 13:20:40.658: GBM_1_AutoML_3_20220328_132038 [GBM def_5] started
## 13:20:41.658: GBM_1_AutoML_3_20220328_132038 [GBM def_5] complete
## 13:20:41.658: Adding model GBM_1_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:41.664: Time assigned for StackedEnsemble_BestOfFamily_1_AutoML_3_20220328_132038: 496.979s
## 13:20:41.665: AutoML: starting StackedEnsemble_BestOfFamily_1_AutoML_3_20220328_132038 model training
## 13:20:41.665: StackedEnsemble_BestOfFamily_1_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_1 (built with AUTO metalearner, using top model from each algorithm type)] started
## 13:20:42.670: StackedEnsemble_BestOfFamily_1_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_1 (built with AUTO metalearner, using top model from each algorithm type)] complete
## 13:20:42.670: Adding model StackedEnsemble_BestOfFamily_1_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:42.677: New leader: StackedEnsemble_BestOfFamily_1_AutoML_3_20220328_132038, mean_residual_deviance: 0.3944254033445526
## 13:20:42.678: Time assigned for XGBoost_2_AutoML_3_20220328_132038: 76.3023125s
## 13:20:42.678: AutoML: starting XGBoost_2_AutoML_3_20220328_132038 model training
## 13:20:42.678: XGBoost_2_AutoML_3_20220328_132038 [XGBoost def_1] started
## 13:20:43.682: XGBoost_2_AutoML_3_20220328_132038 [XGBoost def_1] complete
## 13:20:43.682: Adding model XGBoost_2_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:43.687: Time assigned for DRF_1_AutoML_3_20220328_132038: 89.992s
## 13:20:43.687: AutoML: starting DRF_1_AutoML_3_20220328_132038 model training
## 13:20:43.687: DRF_1_AutoML_3_20220328_132038 [DRF def_1] started
## 13:20:45.689: DRF_1_AutoML_3_20220328_132038 [DRF def_1] complete
## 13:20:45.689: Adding model DRF_1_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=1s
## 13:20:45.696: New leader: DRF_1_AutoML_3_20220328_132038, mean_residual_deviance: 0.3711118793922384
## 13:20:45.696: Time assigned for GBM_2_AutoML_3_20220328_132038: 109.54378125s
## 13:20:45.697: AutoML: starting GBM_2_AutoML_3_20220328_132038 model training
## 13:20:45.697: GBM_2_AutoML_3_20220328_132038 [GBM def_2] started
## 13:20:46.702: GBM_2_AutoML_3_20220328_132038 [GBM def_2] complete
## 13:20:46.702: Adding model GBM_2_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:46.706: Time assigned for GBM_3_AutoML_3_20220328_132038: 140.5534375s
## 13:20:46.706: AutoML: starting GBM_3_AutoML_3_20220328_132038 model training
## 13:20:46.706: GBM_3_AutoML_3_20220328_132038 [GBM def_3] started
## 13:20:47.707: GBM_3_AutoML_3_20220328_132038 [GBM def_3] complete
## 13:20:47.707: Adding model GBM_3_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:47.713: Time assigned for GBM_4_AutoML_3_20220328_132038: 196.372s
## 13:20:47.713: AutoML: starting GBM_4_AutoML_3_20220328_132038 model training
## 13:20:47.713: GBM_4_AutoML_3_20220328_132038 [GBM def_4] started
## 13:20:48.717: GBM_4_AutoML_3_20220328_132038 [GBM def_4] complete
## 13:20:48.717: Adding model GBM_4_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:48.724: Time assigned for StackedEnsemble_BestOfFamily_2_AutoML_3_20220328_132038: 163.30634375s
## 13:20:48.724: AutoML: starting StackedEnsemble_BestOfFamily_2_AutoML_3_20220328_132038 model training
## 13:20:48.724: StackedEnsemble_BestOfFamily_2_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_2 (built with AUTO metalearner, using top model from each algorithm type)] started
## 13:20:49.727: StackedEnsemble_BestOfFamily_2_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_2 (built with AUTO metalearner, using top model from each algorithm type)] complete
## 13:20:49.727: Adding model StackedEnsemble_BestOfFamily_2_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:49.739: New leader: StackedEnsemble_BestOfFamily_2_AutoML_3_20220328_132038, mean_residual_deviance: 0.3606941318460503
## 13:20:49.739: Time assigned for StackedEnsemble_AllModels_1_AutoML_3_20220328_132038: 488.904s
## 13:20:49.739: AutoML: starting StackedEnsemble_AllModels_1_AutoML_3_20220328_132038 model training
## 13:20:49.739: StackedEnsemble_AllModels_1_AutoML_3_20220328_132038 [StackedEnsemble all_2 (built with AUTO metalearner, using all AutoML models)] started
## 13:20:50.739: StackedEnsemble_AllModels_1_AutoML_3_20220328_132038 [StackedEnsemble all_2 (built with AUTO metalearner, using all AutoML models)] complete
## 13:20:50.739: Adding model StackedEnsemble_AllModels_1_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:50.750: Time assigned for XGBoost_3_AutoML_3_20220328_132038: 88.7078203125s
## 13:20:50.750: AutoML: starting XGBoost_3_AutoML_3_20220328_132038 model training
## 13:20:50.750: XGBoost_3_AutoML_3_20220328_132038 [XGBoost def_3] started
## 13:20:51.754: XGBoost_3_AutoML_3_20220328_132038 [XGBoost def_3] complete
## 13:20:51.754: Adding model XGBoost_3_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:51.757: Time assigned for XRT_1_AutoML_3_20220328_132038: 108.196890625s
## 13:20:51.758: AutoML: starting XRT_1_AutoML_3_20220328_132038 model training
## 13:20:51.758: XRT_1_AutoML_3_20220328_132038 [DRF XRT (Extremely Randomized Trees)] started
## 13:20:53.763: XRT_1_AutoML_3_20220328_132038 [DRF XRT (Extremely Randomized Trees)] complete
## 13:20:53.764: Adding model XRT_1_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=1s
## 13:20:53.770: Time assigned for GBM_5_AutoML_3_20220328_132038: 138.53515625s
## 13:20:53.770: AutoML: starting GBM_5_AutoML_3_20220328_132038 model training
## 13:20:53.770: GBM_5_AutoML_3_20220328_132038 [GBM def_1] started
## 13:20:54.773: GBM_5_AutoML_3_20220328_132038 [GBM def_1] complete
## 13:20:54.773: Adding model GBM_5_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:54.780: Time assigned for DeepLearning_1_AutoML_3_20220328_132038: 193.545203125s
## 13:20:54.781: AutoML: starting DeepLearning_1_AutoML_3_20220328_132038 model training
## 13:20:54.781: DeepLearning_1_AutoML_3_20220328_132038 [DeepLearning def_1] started
## 13:20:55.784: DeepLearning_1_AutoML_3_20220328_132038 [DeepLearning def_1] complete
## 13:20:55.784: Adding model DeepLearning_1_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:55.789: Time assigned for StackedEnsemble_BestOfFamily_3_AutoML_3_20220328_132038: 160.95134375s
## 13:20:55.789: AutoML: starting StackedEnsemble_BestOfFamily_3_AutoML_3_20220328_132038 model training
## 13:20:55.789: StackedEnsemble_BestOfFamily_3_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_3 (built with AUTO metalearner, using top model from each algorithm type)] started
## 13:20:56.792: StackedEnsemble_BestOfFamily_3_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_3 (built with AUTO metalearner, using top model from each algorithm type)] complete
## 13:20:56.792: Adding model StackedEnsemble_BestOfFamily_3_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:56.810: Time assigned for StackedEnsemble_AllModels_2_AutoML_3_20220328_132038: 481.833s
## 13:20:56.810: AutoML: starting StackedEnsemble_AllModels_2_AutoML_3_20220328_132038 model training
## 13:20:56.811: StackedEnsemble_AllModels_2_AutoML_3_20220328_132038 [StackedEnsemble all_3 (built with AUTO metalearner, using all AutoML models)] started
## 13:20:57.813: StackedEnsemble_AllModels_2_AutoML_3_20220328_132038 [StackedEnsemble all_3 (built with AUTO metalearner, using all AutoML models)] complete
## 13:20:57.813: Adding model StackedEnsemble_AllModels_2_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:57.830: Time assigned for XGBoost_grid_1_AutoML_3_20220328_132038: 221.9136875s
## 13:20:57.830: AutoML: starting XGBoost_grid_1_AutoML_3_20220328_132038 hyperparameter search
## 13:20:57.831: XGBoost_grid_1_AutoML_3_20220328_132038 [XGBoost Grid Search] started
## 13:20:58.833: Built: 1 models for HyperparamSearch : XGBoost_grid_1_AutoML_3_20220328_132038 [XGBoost Grid Search]
## 13:20:58.833: Adding model XGBoost_grid_1_AutoML_3_20220328_132038_model_1 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:20:59.841: Built: 3 models for HyperparamSearch : XGBoost_grid_1_AutoML_3_20220328_132038 [XGBoost Grid Search]
## 13:20:59.841: Adding model XGBoost_grid_1_AutoML_3_20220328_132038_model_2 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=1s
## 13:20:59.841: Adding model XGBoost_grid_1_AutoML_3_20220328_132038_model_3 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:21:01.45: Built: 5 models for HyperparamSearch : XGBoost_grid_1_AutoML_3_20220328_132038 [XGBoost Grid Search]
## 13:21:01.45: Adding model XGBoost_grid_1_AutoML_3_20220328_132038_model_4 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:21:01.45: Adding model XGBoost_grid_1_AutoML_3_20220328_132038_model_5 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:21:02-13:21:32: XGBoost Grid Search (XGBoost_grid_1_AutoML_3_20220328_132038) built 70 models; each was added to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality (training time per model: 0s)
## 13:21:09.174: New leader: XGBoost_grid_1_AutoML_3_20220328_132038_model_23, mean_residual_deviance: 0.35865798276541583
## 13:21:32.501: XGBoost_grid_1_AutoML_3_20220328_132038 [XGBoost Grid Search] complete
## 13:21:32.514: Time assigned for GBM_grid_1_AutoML_3_20220328_132038: 254.9314375s
## 13:21:32.514: AutoML: starting GBM_grid_1_AutoML_3_20220328_132038 hyperparameter search
## 13:21:32.515: GBM_grid_1_AutoML_3_20220328_132038 [GBM Grid Search] started
## 13:21:33-13:21:43: GBM Grid Search (GBM_grid_1_AutoML_3_20220328_132038) built 16 models; each was added to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality
## 13:21:43.713: GBM_grid_1_AutoML_3_20220328_132038 [GBM Grid Search] complete
## 13:21:43.728: Time assigned for DeepLearning_grid_1_AutoML_3_20220328_132038: 289.94334375s
## 13:21:43.728: AutoML: starting DeepLearning_grid_1_AutoML_3_20220328_132038 hyperparameter search
## 13:21:43.728: DeepLearning_grid_1_AutoML_3_20220328_132038 [DeepLearning Grid Search] started
## 13:23:02-13:26:34: DeepLearning Grid Search (DeepLearning_grid_1_AutoML_3_20220328_132038) built 8 models; each was added to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality (training time per model: 0-22s)
## 13:26:34.0: DeepLearning_grid_1_AutoML_3_20220328_132038 [DeepLearning Grid Search] complete
## 13:26:34.15: Time assigned for StackedEnsemble_BestOfFamily_4_AutoML_3_20220328_132038: 48.2093359375s
## 13:26:34.15: AutoML: starting StackedEnsemble_BestOfFamily_4_AutoML_3_20220328_132038 model training
## 13:26:34.16: StackedEnsemble_BestOfFamily_4_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_4 (built with AUTO metalearner, using top model from each algorithm type)] started
## 13:26:35.16: StackedEnsemble_BestOfFamily_4_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_4 (built with AUTO metalearner, using top model from each algorithm type)] complete
## 13:26:35.16: Adding model StackedEnsemble_BestOfFamily_4_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:26:35.36: New leader: StackedEnsemble_BestOfFamily_4_AutoML_3_20220328_132038, mean_residual_deviance: 0.34700338709977885
## 13:26:35.37: Time assigned for StackedEnsemble_AllModels_3_AutoML_3_20220328_132038: 143.606s
## 13:26:35.37: AutoML: starting StackedEnsemble_AllModels_3_AutoML_3_20220328_132038 model training
## 13:26:35.37: StackedEnsemble_AllModels_3_AutoML_3_20220328_132038 [StackedEnsemble all_4 (built with AUTO metalearner, using all AutoML models)] started
## 13:26:36.42: StackedEnsemble_AllModels_3_AutoML_3_20220328_132038 [StackedEnsemble all_4 (built with AUTO metalearner, using all AutoML models)] complete
## 13:26:36.42: Adding model StackedEnsemble_AllModels_3_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:26:36.88: New leader: StackedEnsemble_AllModels_3_AutoML_3_20220328_132038, mean_residual_deviance: 0.3418523248723318
## 13:26:36.89: Time assigned for DeepLearning_grid_2_AutoML_3_20220328_132038: 57.0216015625s
## 13:26:36.89: AutoML: starting DeepLearning_grid_2_AutoML_3_20220328_132038 hyperparameter search
## 13:26:36.89: DeepLearning_grid_2_AutoML_3_20220328_132038 [DeepLearning Grid Search] started
## 13:27:23-13:27:34: DeepLearning Grid Search (DeepLearning_grid_2_AutoML_3_20220328_132038) built 3 models; each was added to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality
## 13:27:34.365: DeepLearning_grid_2_AutoML_3_20220328_132038 [DeepLearning Grid Search] complete
## 13:27:34.382: Time assigned for DeepLearning_grid_3_AutoML_3_20220328_132038: 56.174s
## 13:27:34.382: AutoML: starting DeepLearning_grid_3_AutoML_3_20220328_132038 hyperparameter search
## 13:27:34.383: DeepLearning_grid_3_AutoML_3_20220328_132038 [DeepLearning Grid Search] started
## 13:28:19-13:28:33: DeepLearning Grid Search (DeepLearning_grid_3_AutoML_3_20220328_132038) built 3 models; each was added to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality
## 13:28:33.975: DeepLearning_grid_3_AutoML_3_20220328_132038 [DeepLearning Grid Search] complete
## 13:28:33.996: Time assigned for StackedEnsemble_AllModels_4_AutoML_3_20220328_132038: 24.647s
## 13:28:33.996: AutoML: starting StackedEnsemble_AllModels_4_AutoML_3_20220328_132038 model training
## 13:28:33.997: StackedEnsemble_AllModels_4_AutoML_3_20220328_132038 [StackedEnsemble all_5 (built with AUTO metalearner, using all AutoML models)] started
## 13:28:34.998: StackedEnsemble_AllModels_4_AutoML_3_20220328_132038 [StackedEnsemble all_5 (built with AUTO metalearner, using all AutoML models)] complete
## 13:28:34.998: Adding model StackedEnsemble_AllModels_4_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:28:35.50: New leader: StackedEnsemble_AllModels_4_AutoML_3_20220328_132038, mean_residual_deviance: 0.3417307577994605
## 13:28:35.51: Time assigned for XGBoost_lr_search_selection_AutoML_3_20220328_132038: 3.725052490234375s
## 13:28:35.51: XGBoost_lr_search_selection_AutoML_3_20220328_132038 [XGBoost lr_search] started
## 13:28:35.51: Applying learning rate search on best XGBoost: XGBoost_grid_1_AutoML_3_20220328_132038_model_23
## 13:28:35.51: AutoML: starting XGBoost_lr_search_selection_AutoML_3_20220328_132038_select model training
## 13:28:41.68: XGBoost_lr_search_selection_AutoML_3_20220328_132038 [XGBoost lr_search] complete
## 13:28:41.69: Time assigned for GBM_lr_annealing_selection_AutoML_3_20220328_132038: 1.098375s
## 13:28:41.69: GBM_lr_annealing_selection_AutoML_3_20220328_132038 [GBM lr_annealing] started
## 13:28:41.69: Retraining best GBM with learning rate annealing: GBM_4_AutoML_3_20220328_132038
## 13:28:41.69: AutoML: starting GBM_lr_annealing_selection_AutoML_3_20220328_132038_select_model model training
## 13:28:44.85: GBM_lr_annealing_selection_AutoML_3_20220328_132038 [GBM lr_annealing] complete
## 13:28:44.88: No base models, due to timeouts or the exclude_algos option. Skipping StackedEnsemble 'monotonic'.
## 13:28:44.90: Time assigned for StackedEnsemble_BestOfFamily_5_AutoML_3_20220328_132038: 7.2765s
## 13:28:44.90: AutoML: starting StackedEnsemble_BestOfFamily_5_AutoML_3_20220328_132038 model training
## 13:28:44.90: StackedEnsemble_BestOfFamily_5_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_xgboost (built with xgboost metalearner, using top model from each algorithm type)] started
## 13:28:46.95: StackedEnsemble_BestOfFamily_5_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_xgboost (built with xgboost metalearner, using top model from each algorithm type)] complete
## 13:28:46.95: Adding model StackedEnsemble_BestOfFamily_5_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=1s, total=1s
## 13:28:46.120: Time assigned for StackedEnsemble_BestOfFamily_6_AutoML_3_20220328_132038: 12.523s
## 13:28:46.120: AutoML: starting StackedEnsemble_BestOfFamily_6_AutoML_3_20220328_132038 model training
## 13:28:46.120: StackedEnsemble_BestOfFamily_6_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_gbm (built with gbm metalearner, using top model from each algorithm type)] started
## 13:28:47.123: StackedEnsemble_BestOfFamily_6_AutoML_3_20220328_132038 [StackedEnsemble best_of_family_gbm (built with gbm metalearner, using top model from each algorithm type)] complete
## 13:28:47.123: Adding model StackedEnsemble_BestOfFamily_6_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=0s, total=0s
## 13:28:47.145: Time assigned for StackedEnsemble_AllModels_5_AutoML_3_20220328_132038: 5.749s
## 13:28:47.145: AutoML: starting StackedEnsemble_AllModels_5_AutoML_3_20220328_132038 model training
## 13:28:47.146: StackedEnsemble_AllModels_5_AutoML_3_20220328_132038 [StackedEnsemble all_xgboost (built with xgboost metalearner, using all AutoML models)] started
## 13:28:53.163: StackedEnsemble_AllModels_5_AutoML_3_20220328_132038 [StackedEnsemble all_xgboost (built with xgboost metalearner, using all AutoML models)] complete
## 13:28:53.163: Adding model StackedEnsemble_AllModels_5_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=5s, total=5s
## 13:28:53.304: Time assigned for StackedEnsemble_AllModels_6_AutoML_3_20220328_132038: 5.339s
## 13:28:53.304: AutoML: starting StackedEnsemble_AllModels_6_AutoML_3_20220328_132038 model training
## 13:28:53.304: StackedEnsemble_AllModels_6_AutoML_3_20220328_132038 [StackedEnsemble all_gbm (built with gbm metalearner, using all AutoML models)] started
## 13:28:59.314: StackedEnsemble_AllModels_6_AutoML_3_20220328_132038 [StackedEnsemble all_gbm (built with gbm metalearner, using all AutoML models)] complete
## 13:28:59.314: Adding model StackedEnsemble_AllModels_6_AutoML_3_20220328_132038 to leaderboard Leaderboard_AutoML_3_20220328_132038@@quality. Training time: model=5s, total=5s
## 13:28:59.462: AutoML: out of time; skipping StackedEnsemble best_of_family_xglm (built with AUTO metalearner, using top model from each algorithm type)
## 13:28:59.462: AutoML: out of time; skipping StackedEnsemble all_xglm (built with AUTO metalearner, using all AutoML models)
## 13:28:59.462: AutoML: out of time; skipping completion resume_best_grids
## 13:28:59.462: AutoML: out of time; skipping StackedEnsemble best_of_family (built with AUTO metalearner, using top model from each algorithm type)
## 13:28:59.462: AutoML: out of time; skipping StackedEnsemble best_N (built with AUTO metalearner, using best 1000 non-SE models)
## 13:28:59.463: Actual modeling steps: [{XGBoost : [def_2 (1g, 10w)]}, {GLM : [def_1 (1g, 10w)]}, {GBM : [def_5 (1g, 10w)]}, {StackedEnsemble : [best_of_family_1 (1g, 5w)]}, {XGBoost : [def_1 (2g, 10w)]}, {DRF : [def_1 (2g, 10w)]}, {GBM : [def_2 (2g, 10w), def_3 (2g, 10w), def_4 (2g, 10w)]}, {StackedEnsemble : [best_of_family_2 (2g, 5w), all_2 (2g, 10w)]}, {XGBoost : [def_3 (3g, 10w)]}, {DRF : [XRT (3g, 10w)]}, {GBM : [def_1 (3g, 10w)]}, {DeepLearning : [def_1 (3g, 10w)]}, {StackedEnsemble : [best_of_family_3 (3g, 5w), all_3 (3g, 10w)]}, {XGBoost : [grid_1 (4g, 90w)]}, {GBM : [grid_1 (4g, 60w)]}, {DeepLearning : [grid_1 (4g, 30w)]}, {StackedEnsemble : [best_of_family_4 (4g, 5w), all_4 (4g, 10w)]}, {DeepLearning : [grid_2 (5g, 30w), grid_3 (5g, 30w)]}, {StackedEnsemble : [all_5 (5g, 10w)]}, {XGBoost : [lr_search (6g, 30w)]}, {GBM : [lr_annealing (6g, 10w)]}, {StackedEnsemble : [best_of_family_xgboost (6g, 10w), best_of_family_gbm (6g, 10w), all_xgboost (7g, 10w), all_gbm (7g, 10w)]}]
## 13:28:59.463: AutoML build stopped: 2022.03.28 13:28:59.463
## 13:28:59.463: AutoML build done: built 112 models
## 13:28:59.463: AutoML duration: 8 min 20.820 sec
## 13:28:59.489: Verifying training frame immutability. . .
## 13:28:59.489: Training frame was not mutated (as expected).
#project_name = "winequality_lb_frame")
print(auto_ml@leaderboard)
## model_id
## 1 StackedEnsemble_AllModels_4_AutoML_3_20220328_132038
## 2 StackedEnsemble_AllModels_3_AutoML_3_20220328_132038
## 3 StackedEnsemble_BestOfFamily_4_AutoML_3_20220328_132038
## 4 XGBoost_grid_1_AutoML_3_20220328_132038_model_23
## 5 StackedEnsemble_AllModels_6_AutoML_3_20220328_132038
## 6 StackedEnsemble_BestOfFamily_2_AutoML_3_20220328_132038
## mean_residual_deviance rmse mse mae rmsle
## 1 0.3417308 0.5845774 0.3417308 0.4224870 0.09487904
## 2 0.3418523 0.5846814 0.3418523 0.4228851 0.09488577
## 3 0.3470034 0.5890699 0.3470034 0.4181518 0.09536593
## 4 0.3586580 0.5988806 0.3586580 0.4128412 0.09667850
## 5 0.3599660 0.5999716 0.3599660 0.4124179 0.09667705
## 6 0.3606941 0.6005782 0.3606941 0.4294703 0.09711357
##
## [124 rows x 6 columns]
preds <- h2o.predict(auto_ml, testing)
head(preds)
## predict
## 1 5.281472
## 2 5.674318
## 3 5.257546
## 4 5.491958
## 5 5.110115
## 6 5.341811
aml_perf <- h2o.performance(auto_ml@leader, testing)
aml_perf
## H2ORegressionMetrics: stackedensemble
##
## MSE: 0.3417308
## RMSE: 0.5845774
## MAE: 0.422487
## RMSLE: 0.09487904
## Mean Residual Deviance : 0.3417308
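The leader's test metrics above follow the standard regression definitions; as a quick sanity check (a toy sketch in Python, independent of h2o), note that for a Gaussian response the mean residual deviance h2o reports is simply the MSE:

```python
import math

# Toy check of the regression metrics h2o reports, using their standard
# definitions. Values below are made up for illustration only.
y_true = [5, 6, 5, 7]
y_pred = [5.2, 5.7, 5.4, 6.5]

n = len(y_true)
mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
rmse = math.sqrt(mse)
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
# RMSLE compresses large values by working on log1p of truth and prediction.
rmsle = math.sqrt(sum((math.log1p(t) - math.log1p(p)) ** 2
                      for t, p in zip(y_true, y_pred)) / n)
```

Because quality scores are small positive integers, RMSLE here is much smaller than RMSE, matching the pattern in the h2o output above.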
explns <- h2o.explain(auto_ml, testing)
explns
##
## Leaderboard
## ===========
##
## > Leaderboard shows models with their metrics. When provided with H2OAutoML object, the leaderboard shows 5-fold cross-validated metrics by default (depending on the H2OAutoML settings), otherwise it shows metrics computed on the newdata. At most 20 models are shown by default.
##
##
## | | model_id | mean_residual_deviance | rmse | mse | mae | rmsle | training_time_ms | predict_time_per_row_ms | algo
## |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
## | **1** |StackedEnsemble_AllModels_4_AutoML_3_20220328_132038 | 0.34173075779946 | 0.584577418140198 | 0.34173075779946 | 0.422486994582696 | 0.0948790416626287 | 410 | 0.173901 | StackedEnsemble |
## | **2** |StackedEnsemble_AllModels_3_AutoML_3_20220328_132038 | 0.341852324872332 | 0.58468138748581 | 0.341852324872332 | 0.422885129470241 | 0.0948857659281019 | 295 | 0.127177 | StackedEnsemble |
## | **3** |StackedEnsemble_BestOfFamily_4_AutoML_3_20220328_132038 | 0.347003387099779 | 0.589069933963514 | 0.347003387099779 | 0.418151833392541 | 0.0953659255241145 | 134 | 0.025286 | StackedEnsemble |
## | **4** |XGBoost_grid_1_AutoML_3_20220328_132038_model_23 | 0.358657982765416 | 0.598880608106003 | 0.358657982765416 | 0.412841177228744 | 0.0966784956313138 | 225 | 0.006157 | XGBoost |
## | **5** |StackedEnsemble_AllModels_6_AutoML_3_20220328_132038 | 0.359965970830161 | 0.599971641688306 | 0.359965970830161 | 0.412417865860213 | 0.0966770509167918 | 5465 | 0.353067 | StackedEnsemble |
## | **6** |StackedEnsemble_BestOfFamily_2_AutoML_3_20220328_132038 | 0.36069413184605 | 0.60057816464308 | 0.36069413184605 | 0.429470321751477 | 0.0971135673980101 | 127 | 0.02106 | StackedEnsemble |
## | **7** |StackedEnsemble_AllModels_1_AutoML_3_20220328_132038 | 0.361085245852051 | 0.60090369099553 | 0.361085245852051 | 0.429692952316934 | 0.0971590383550771 | 136 | 0.020642 | StackedEnsemble |
## | **8** |StackedEnsemble_BestOfFamily_3_AutoML_3_20220328_132038 | 0.363176298519046 | 0.602641102580173 | 0.363176298519046 | 0.431612534479501 | 0.0973796296966892 | 143 | 0.028888 | StackedEnsemble |
## | **9** |StackedEnsemble_AllModels_2_AutoML_3_20220328_132038 | 0.363285364178078 | 0.602731585515541 | 0.363285364178078 | 0.431678495710255 | 0.0973924155985525 | 150 | 0.036584 | StackedEnsemble |
## | **10** |XGBoost_grid_1_AutoML_3_20220328_132038_model_67 | 0.363688133275926 | 0.603065612745352 | 0.363688133275926 | 0.42452334994592 | 0.0971179502561038 | 75 | 0.002824 | XGBoost |
## | **11** |XGBoost_grid_1_AutoML_3_20220328_132038_model_36 | 0.366022135574077 | 0.604997632701217 | 0.366022135574077 | 0.416332703556506 | 0.0974408743035765 | 115 | 0.003092 | XGBoost |
## | **12** |XGBoost_grid_1_AutoML_3_20220328_132038_model_27 | 0.367012264570853 | 0.60581537168584 | 0.367012264570853 | 0.441774244235857 | 0.0975070471173697 | 99 | 0.002471 | XGBoost |
## | **13** |StackedEnsemble_BestOfFamily_6_AutoML_3_20220328_132038 | 0.3673955628927 | 0.606131638254184 | 0.3673955628927 | 0.427607279022001 | 0.0981867448545554 | 953 | 0.041077 | StackedEnsemble |
## | **14** |XGBoost_grid_1_AutoML_3_20220328_132038_model_9 | 0.367617424479608 | 0.606314624992345 | 0.367617424479608 | 0.419421732728251 | 0.0974121365197805 | 98 | 0.002855 | XGBoost |
## | **15** |XGBoost_grid_1_AutoML_3_20220328_132038_model_4 | 0.370844682749144 | 0.6089701821511 | 0.370844682749144 | 0.437434289056032 | 0.0981619102995089 | 126 | 0.002742 | XGBoost |
## | **16** |DRF_1_AutoML_3_20220328_132038 | 0.371111879392238 | 0.609189526660003 | 0.371111879392238 | 0.439125924824131 | 0.09846320517022 | 203 | 0.008351 | DRF |
## | **17** |GBM_4_AutoML_3_20220328_132038 | 0.372002651617509 | 0.609920201024289 | 0.372002651617509 | 0.457920355673032 | 0.098391357463785 | 147 | 0.006968 | GBM |
## | **18** |XGBoost_grid_1_AutoML_3_20220328_132038_model_54 | 0.373246707120016 | 0.610939200837543 | 0.373246707120016 | 0.434911443497324 | 0.0992024151313326 | 222 | 0.003224 | XGBoost |
## | **19** |XGBoost_grid_1_AutoML_3_20220328_132038_model_57 | 0.373336975083348 | 0.61101307275978 | 0.373336975083348 | 0.440965889674153 | 0.0988391912109151 | 93 | 0.002371 | XGBoost |
## | **20** |XGBoost_grid_1_AutoML_3_20220328_132038_model_68 | 0.37340669661824 | 0.611070124141444 | 0.37340669661824 | 0.433318214368094 | 0.0990560527229226 | 169 | 0.002763 | XGBoost |
##
##
## Residual Analysis
## =================
##
## > Residual Analysis plots the fitted values vs residuals on a test dataset. Ideally, residuals should be randomly distributed. Patterns in this plot can indicate potential problems with the model selection, e.g., using a simpler model than necessary, not accounting for heteroscedasticity, autocorrelation, etc. Note that if you see "striped" lines of residuals, that is an artifact of having an integer valued (vs a real valued) response variable.
##
##
## Variable Importance
## ===================
##
## > The variable importance plot shows the relative importance of the most important variables in the model.
##
##
## Variable Importance Heatmap
## ===========================
##
## > Variable importance heatmap shows variable importance across multiple models. Some models in H2O return variable importance for one-hot (binary indicator) encoded versions of categorical columns (e.g. Deep Learning, XGBoost). In order for the variable importance of categorical columns to be compared across all model types we compute a summarization of the variable importance across all one-hot encoded features and return a single variable importance for the original categorical feature. By default, the models and variables are ordered by their similarity.
##
##
## Model Correlation
## =================
##
## > This plot shows the correlation between the predictions of the models. For classification, frequency of identical predictions is used. By default, models are ordered by their similarity (as computed by hierarchical clustering).
## Interpretable models: GLM_1_AutoML_3_20220328_132038
##
##
## SHAP Summary
## ============
##
## > SHAP summary plot shows the contribution of the features for each instance (row of data). The sum of the feature contributions and the bias term is equal to the raw prediction of the model, i.e., prediction before applying inverse link function.
##
##
## Partial Dependence Plots
## ========================
##
## > Partial dependence plot (PDP) gives a graphical depiction of the marginal effect of a variable on the response. The effect of a variable is measured as the change in the mean response. PDP assumes independence between the feature for which the PDP is computed and the rest.
##
##
## Individual Conditional Expectations
## ===================================
##
## > An Individual Conditional Expectation (ICE) plot gives a graphical depiction of the marginal effect of a variable on the response. ICE plots are similar to partial dependence plots (PDP); PDP shows the average effect of a feature while ICE plot shows the effect for a single instance. This function will plot the effect for each decile. In contrast to the PDP, ICE plots can provide more insight, especially when there is stronger feature interaction.
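The PDP and ICE descriptions above boil down to a simple averaging scheme: force the feature of interest to a grid value in every row, predict, and either keep the per-row traces (ICE) or average them (PDP). As a conceptual sketch in Python with a made-up linear stand-in for the model (not h2o's implementation):

```python
def predict(row):
    # Hypothetical stand-in model: quality rises with alcohol,
    # falls with volatile acidity. Coefficients are illustrative only.
    return 3.0 + 0.3 * row["alcohol"] - 2.0 * row["volatile_acidity"]

data = [
    {"alcohol": 9.4, "volatile_acidity": 0.70},
    {"alcohol": 11.2, "volatile_acidity": 0.28},
    {"alcohol": 10.0, "volatile_acidity": 0.50},
]

def ice_curves(data, feature, grid):
    # One prediction trace per row: force `feature` to each grid value.
    return [[predict({**row, feature: g}) for g in grid] for row in data]

def pdp(data, feature, grid):
    # The PDP is the pointwise average of the ICE curves.
    curves = ice_curves(data, feature, grid)
    return [sum(c[i] for c in curves) / len(curves) for i in range(len(grid))]

grid = [9.0, 10.0, 11.0, 12.0]
curves = ice_curves(data, "alcohol", grid)
averaged = pdp(data, "alcohol", grid)
```

Because the PDP averages over rows, it can hide interactions that individual ICE traces reveal, which is why h2o plots both.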
library(lime)
model_ids <-as.data.frame(auto_ml@leaderboard$model_id)[,1]
best_model <- h2o.getModel(grep("StackedEnsemble_BestOfFamily", model_ids, value=TRUE)[1])
explainer <- lime(as.data.frame(training[, -12]), best_model, bin_continuous = F) # drop the 'quality' response column, keeping only the predictors
explanation <- explain(as.data.frame(testing[, -12]), # cherry-picked rows for explanation purposes
                       explainer = explainer,
                       kernel_width = 1,
                       n_features = 5, # max number of features used in each explanation
                       n_labels = 1)
## Warning in explain.data.frame(as.data.frame(testing[, -12]), explainer =
## explainer, : "labels" and "n_labels" arguments are ignored when explaining
## regression models
##
plot_features(explanation)
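The feature weights that `plot_features` displays come from fitting a weighted local linear surrogate around each instance. As a one-feature conceptual sketch in Python (a simplified illustration of the idea, not the lime package internals), LIME perturbs the instance, weights samples by a proximity kernel, and reads the local feature weight off a weighted least-squares fit:

```python
import math
import random

def black_box(alcohol):
    # Hypothetical nonlinear model standing in for the stacked ensemble.
    return 3.0 + 0.5 * alcohol - 0.02 * alcohol ** 2

def lime_weight(x0, kernel_width=1.0, n_samples=500, seed=0):
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, 2) for _ in range(n_samples)]
    ys = [black_box(x) for x in xs]
    # Proximity kernel: samples near x0 dominate the fit.
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    # Weighted least squares for the slope of y ~ a + b * x.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    return (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) /
            sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))

# The local slope should be close to the true derivative 0.5 - 0.04 * x0.
slope = lime_weight(10.0)
```

A smaller `kernel_width` (as with `kernel_width = 1` in the `explain()` call above) makes the surrogate more local, trading stability for fidelity to the model's behavior near the instance.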